# Ultra-long context reasoning
All entries below are tagged as large language models.

| Model | Publisher | License | Downloads | Likes | Description |
|---|---|---|---|---|---|
| Qwen3 4B Q8_0 64K/128K/256K Context GGUF | DavidAU | Apache-2.0 | 401 | 2 | Three Q8_0 quantized versions of the Qwen3 4B model, supporting 64K, 128K, and 256K context lengths, optimized for long-text generation and deep reasoning tasks. |
| Qwen3 8B GGUF | lmstudio-community | Apache-2.0 | 39.45k | 6 | An 8B-parameter large language model from the Qwen team, supporting ultra-long context and multilingual processing. |
| Qwen3 30B A3B GGUF | lmstudio-community | Apache-2.0 | 77.06k | 21 | A large language model from Qwen, supporting a 131,072-token context length and excelling at creative writing, role-playing, and multi-turn conversation. |
| Qwen3 235B A22B GGUF | lmstudio-community | Apache-2.0 | 22.88k | 10 | Quantized version of the Qwen team's 235B-parameter Mixture-of-Experts model, supporting a 131,072-token context length. |
| Llama 3.1 Nemotron Ultra 253B V1 | nvidia | Other | 21.78k | 270 | Derived from Meta Llama-3.1-405B-Instruct and optimized via neural architecture search; supports a 128K-token context and targets reasoning, dialogue, and instruction-following tasks (Transformers, English). |
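The separate 64K, 128K, and 256K variants exist because memory cost grows linearly with context length: the KV cache must hold keys and values for every layer and every position. A minimal sketch of that arithmetic, assuming a plausible Qwen3-4B-like shape (36 layers, 8 grouped-query KV heads, head dimension 128 — these figures are assumptions for illustration, check the model card) and fp16 cache entries:

```python
def kv_cache_bytes(ctx_len: int,
                   n_layers: int = 36,       # assumed layer count (illustrative)
                   n_kv_heads: int = 8,      # assumed GQA key/value heads
                   head_dim: int = 128,      # assumed per-head dimension
                   bytes_per_elem: int = 2   # fp16/bf16 cache entries
                   ) -> int:
    """Estimate KV-cache size: 2 tensors (keys + values) per layer,
    one (n_kv_heads * head_dim) vector per cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# The three context variants listed above:
for ctx in (65536, 131072, 262144):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB")
```

Under these assumed dimensions, a full 64K-token cache is about 9 GiB and the 256K variant four times that, which is why picking the smallest context variant that fits the workload matters even when the weights themselves are identical.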